Why the world's militaries are scrambling to create their own Starlink

New Scientist

Starlink's satellite constellation provides a reliable internet connection to almost anywhere on Earth, conferring a huge advantage on the modern battlefield. But the network is also run by controversial billionaire Elon Musk, presenting a risk to militaries that could easily find themselves cut off. So countries are now racing to build their own versions. The Starlink network consists of almost 10,000 satellites that offer internet connections across most of the planet via small dishes on the ground.


Why the US is using a cheap Iranian drone against the country itself

New Scientist

The US and Iran are trading blows in the Gulf with a simple drone that costs as little as $50,000 to make. But why is a slow, cheap and relatively primitive drone seeing use in 2026 alongside hypersonic missiles and stealth jets? Iranian company Shahed Aviation Industries originally designed the relatively simple Shahed 136 attack drone, and Iran is now fending off US copies launched against it in combat. Why, when the US military has expensive, cutting-edge, high-tech weapons, is it making flimsy drones powered by a motorbike engine?





Towards 6G Native-AI Edge Networks: A Semantic-Aware and Agentic Intelligence Paradigm

Feng, Chenyuan, Zhang, Anbang, Min, Geyong, Huang, Yongming, Quek, Tony Q. S., You, Xiaohu

arXiv.org Artificial Intelligence

The evolution toward sixth-generation (6G) wireless systems positions intelligence as a native network capability, fundamentally transforming the design of radio access networks (RANs). Within this vision, semantic communication (SemCom) and agentic intelligence are expected to play central roles. SemCom departs from bit-level fidelity and instead emphasizes task-oriented meaning exchange, enabling compact semantic representations and introducing new performance measures such as semantic fidelity and task success rate. Agentic intelligence endows distributed RAN entities with goal-driven autonomy, reasoning, planning, and multi-agent collaboration, increasingly supported by foundation models and knowledge graphs. In this work, we first introduce the conceptual foundations of SemCom and agentic networking, and discuss why existing AI-driven O-RAN solutions remain largely bit-centric and task-siloed. We then present a unified taxonomy that organizes recent research along three axes: i) semantic abstraction level (symbol/feature/intent/knowledge), ii) agent autonomy and coordination granularity (single-, multi-, and hierarchical-agent), and iii) RAN control placement across PHY/MAC, the near-real-time RIC, and the non-real-time RIC. Based on this taxonomy, we systematically introduce enabling technologies, including task-oriented semantic encoders/decoders, multi-agent reinforcement learning, foundation-model-assisted RAN agents, and knowledge-graph-based reasoning for cross-layer awareness. Representative 6G use cases, such as immersive XR, vehicular V2X, and industrial digital twins, are analyzed to illustrate the semantic-agentic convergence in practice. Finally, we identify open challenges in semantic representation standardization, scalable and trustworthy agent coordination, O-RAN interoperability, and energy-efficient AI deployment, and outline research directions toward operational semantic-agentic AI-RAN.
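The paper's three-axis taxonomy lends itself to a simple data model. The sketch below is purely illustrative (the class names, example functions, and the `same_axis` helper are hypothetical, not from the paper); it shows how RAN intelligence functions could be indexed along the semantic-abstraction, agent-granularity, and control-placement axes:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SemanticLevel(Enum):
    """Axis i): semantic abstraction level."""
    SYMBOL = auto()
    FEATURE = auto()
    INTENT = auto()
    KNOWLEDGE = auto()

class AgentGranularity(Enum):
    """Axis ii): agent autonomy and coordination granularity."""
    SINGLE = auto()
    MULTI = auto()
    HIERARCHICAL = auto()

class ControlPlacement(Enum):
    """Axis iii): RAN control placement."""
    PHY_MAC = auto()
    NEAR_RT_RIC = auto()
    NON_RT_RIC = auto()

@dataclass(frozen=True)
class RanFunction:
    """One RAN intelligence function positioned in the taxonomy."""
    name: str
    semantic_level: SemanticLevel
    granularity: AgentGranularity
    placement: ControlPlacement

def same_axis(a, b):
    """Count taxonomy axes on which two functions coincide (0..3)."""
    return sum([a.semantic_level == b.semantic_level,
                a.granularity == b.granularity,
                a.placement == b.placement])

# Two hypothetical functions classified along the three axes.
xr_codec = RanFunction("task-oriented XR encoder", SemanticLevel.FEATURE,
                       AgentGranularity.SINGLE, ControlPlacement.PHY_MAC)
slice_agent = RanFunction("cross-layer slicing agent", SemanticLevel.INTENT,
                          AgentGranularity.MULTI, ControlPlacement.NEAR_RT_RIC)
print(same_axis(xr_codec, slice_agent))  # 0: they differ on every axis
```

Indexing survey entries this way makes the paper's cross-tabulations (e.g. which placements host which abstraction levels) straightforward queries over a list of `RanFunction` records.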


The promising potential of vision language models for the generation of textual weather forecasts

Steele, Edward C. C., Mane, Dinesh, Monti, Emilio, Orus, Luis, Chantrill-Cheyette, Rebecca, Couch, Matthew, Dale, Kirstine I., Eaton, Simon, Rangarajan, Govindarajan, Majlesi, Amir, Ramsdale, Steven, Sharpe, Michael, Smith, Craig, Smith, Jonathan, Yates, Rebecca, Ellis, Holly, Ewen, Charles

arXiv.org Artificial Intelligence

Despite the promising capability of multimodal foundation models, their application to the generation of meteorological products and services remains nascent. To accelerate adoption, we explore the novel use of a vision language model for writing the iconic Shipping Forecast text directly from video-encoded gridded weather data. These early results demonstrate promising, scalable technological opportunities for enhancing production efficiency and service innovation within the weather enterprise and beyond.
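The abstract does not detail how the gridded data are video-encoded, but a minimal sketch of the idea, assuming each timestep's 2-D field is rescaled to an 8-bit greyscale frame before being handed to the vision language model, might look like this (the function name, grid values, and scaling range are all illustrative assumptions, not the authors' pipeline):

```python
def grid_to_frames(fields, vmin, vmax):
    """Scale each timestep's 2-D grid of physical values to 8-bit pixels.

    fields: list of timesteps, each a list of rows of floats.
    vmin/vmax: physical range mapped onto pixel intensities 0..255.
    Returns one greyscale frame (list of rows of ints) per timestep.
    """
    span = vmax - vmin
    frames = []
    for grid in fields:
        frame = [[round(255 * (v - vmin) / span) for v in row]
                 for row in grid]
        frames.append(frame)
    return frames

# Two timesteps of a tiny 2x2 mean-sea-level-pressure grid (hPa, made up).
pressure = [
    [[990.0, 1000.0], [1010.0, 1020.0]],
    [[995.0, 1005.0], [1015.0, 1020.0]],
]
frames = grid_to_frames(pressure, vmin=990.0, vmax=1020.0)
print(frames[0])  # [[0, 85], [170, 255]]
```

The resulting frame sequence can then be passed to a video-capable model alongside a prompt requesting forecast text in the Shipping Forecast's terse house style.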


Reasoning-Aware Multimodal Fusion for Hateful Video Detection

Yang, Shuonan, Chen, Tailin, Yue, Jiangbei, Cheng, Guangliang, Jiao, Jianbo, Fu, Zeyu

arXiv.org Artificial Intelligence

Hate speech in online videos poses an increasingly serious threat to digital platforms, especially as video content becomes more multimodal and context-dependent. Existing methods often struggle to fuse the complex semantic relationships between modalities and lack the ability to understand nuanced hateful content. To address these issues, we propose an innovative Reasoning-Aware Multimodal Fusion (RAMF) framework. To tackle the first challenge, we design Local-Global Context Fusion (LGCF) to capture both local salient cues and global temporal structures, and propose Semantic Cross Attention (SCA) to enable fine-grained multimodal semantic interaction. To tackle the second challenge, we introduce adversarial reasoning: a structured three-stage process in which a vision-language model generates (i) objective descriptions, (ii) hate-assumed inferences, and (iii) non-hate-assumed inferences, providing complementary semantic perspectives that enrich the model's contextual understanding of nuanced hateful intent. Evaluations on two real-world hateful video datasets demonstrate that our method achieves robust generalisation performance, improving upon state-of-the-art methods by 3% in Macro-F1 and 7% in hate class recall. We will release the code after the anonymity period ends.
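As a rough illustration of the three-stage adversarial reasoning step, one could imagine prompting a vision-language model three ways per clip and feeding all three outputs into the fusion stage. The prompt wording and function name below are hypothetical sketches, not taken from the paper:

```python
def adversarial_reasoning_prompts(clip_description):
    """Build the three prompt variants of the adversarial reasoning stage.

    In the full system each prompt would accompany the video frames sent to
    a vision-language model; only the text side is sketched here.
    """
    return {
        # Stage (i): neutral, judgement-free description of the content.
        "objective": ("Describe objectively, without judgement, what is "
                      "shown: " + clip_description),
        # Stage (ii): inference under the assumption the clip is hateful.
        "hate_assumed": ("Assume the video is hateful. Explain what "
                         "evidence would support that reading: "
                         + clip_description),
        # Stage (iii): inference under the opposite, benign assumption.
        "non_hate_assumed": ("Assume the video is not hateful. Explain the "
                             "benign interpretation: " + clip_description),
    }

clip = "a crowd chanting slogans at a rally"
prompts = adversarial_reasoning_prompts(clip)
for stage, prompt in prompts.items():
    print(stage, "->", prompt)
```

Contrasting the two assumption-conditioned inferences against the objective description is what gives the classifier complementary perspectives on ambiguous clips.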


PriVi: Towards A General-Purpose Video Model For Primate Behavior In The Wild

Mueller, Felix B., Meier, Jan F., Lueddecke, Timo, Vogg, Richard, Freixanet, Roger L., Hassler, Valentin, Bosshard, Tiffany, Karakoc, Elif, O'Hearn, William J., Pereira, Sofia M., Sehner, Sandro, Wierucka, Kaja, Burkart, Judith, Fichtel, Claudia, Fischer, Julia, Gail, Alexander, Hobaiter, Catherine, Ostner, Julia, Samuni, Liran, Schülke, Oliver, Shahidi, Neda, Wessling, Erin G., Ecker, Alexander S.

arXiv.org Artificial Intelligence

Non-human primates are our closest living relatives, and analyzing their behavior is central to research in cognition, evolution, and conservation. Computer vision could greatly aid this research, but existing methods often rely on human-centric pretrained models and focus on single datasets, which limits generalization. We address this limitation by shifting from a model-centric to a data-centric approach and introduce PriVi, a large-scale primate-centric video pretraining dataset. PriVi contains 424 hours of curated video, combining 174 hours from behavioral research across 11 settings with 250 hours of diverse web-sourced footage, assembled through a scalable data curation pipeline. We continue pretraining V-JEPA, a large-scale video model, on PriVi to learn primate-specific representations and evaluate it using a lightweight frozen classifier. Across four benchmark datasets (ChimpACT, PanAf500, BaboonLand, and ChimpBehave), our approach consistently outperforms prior work, including fully fine-tuned baselines, and scales favorably with fewer labels. These results demonstrate that primate-centric pretraining substantially improves data efficiency and generalization, making it a promising approach for low-label applications. Code, models, and the majority of the dataset will be made available.
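The "lightweight frozen classifier" evaluation can be illustrated with a toy probe over frozen embeddings. The nearest-centroid classifier, feature values, and behaviour labels below are all invented for illustration; the paper's actual probe may differ:

```python
def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_probe(features_by_class, query):
    """Classify a frozen feature vector by distance to per-class centroids.

    The video backbone stays frozen: only these cheap per-class centroids
    are "trained", so label efficiency depends on the representation.
    """
    centroids = {label: centroid(feats)
                 for label, feats in features_by_class.items()}

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(centroids, key=lambda label: sq_dist(centroids[label], query))

# Toy frozen embeddings for two behaviour classes (values made up).
feats = {
    "grooming": [[0.9, 0.1], [0.8, 0.2]],
    "feeding":  [[0.1, 0.9], [0.2, 0.8]],
}
print(nearest_centroid_probe(feats, [0.85, 0.15]))  # grooming
```

Because the probe has almost no capacity of its own, any accuracy gain over a baseline backbone directly reflects better primate-specific representations, which is what makes frozen evaluation a clean test of the pretraining data.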